
    Fair Labor Practices: Are Workers Truly Protected?

    Private businesses often claim that they employ fair labor practices and care about employees' well-being. However, firms may face contradictory pressures, since fair labor practices can raise costs and lower profit. In this thesis, I examine historical and contemporary workers' conditions and fair labor practices. Workers' conditions during the Industrial Revolution and the basis of relevant Marxist theories are examined. Contemporary labor practices are also examined, and a secondary analysis of data on worker attitudes in two major companies is conducted. The results and discussion suggest that although progress in workers' conditions has been made, further improvements are still needed.

    BehAVExplor: Behavior Diversity Guided Testing for Autonomous Driving Systems

    Testing Autonomous Driving Systems (ADSs) is a critical task for ensuring the reliability and safety of autonomous vehicles. Existing methods mainly focus on searching for safety violations, while the diversity of the generated test cases is ignored, which may produce many redundant test cases and failures. Such redundant failures reduce testing performance and increase failure analysis costs. In this paper, we present a novel behavior-guided fuzzing technique (BehAVExplor) to explore the different behaviors of the ego vehicle (i.e., the vehicle controlled by the ADS under test) and detect diverse violations. Specifically, we design an efficient unsupervised model, called BehaviorMiner, to characterize the behavior of the ego vehicle. BehaviorMiner extracts temporal features from the given scenarios and performs a clustering-based abstraction to group behaviors with similar features into abstract states. A new test case is added to the seed corpus if it triggers new behaviors (e.g., covers new abstract states). Due to the potential conflict between behavior diversity and the general violation feedback, we further propose an energy mechanism to guide seed selection and mutation, where the energy of a seed quantifies how promising it is. We evaluated BehAVExplor on Apollo, an industrial-grade ADS, in the LGSVL simulation environment. Empirical results show that BehAVExplor finds more diverse violations than the state-of-the-art baselines.
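
    The clustering-based abstraction described above can be illustrated with a minimal sketch: fit a clustering model on temporal feature vectors extracted from ego-vehicle traces, map each new trace to an abstract state, and keep a test case as a seed only if it covers a previously unseen state. The feature layout, cluster count, and scikit-learn-based implementation below are illustrative assumptions, not the authors' BehaviorMiner code.

import numpy as np
from sklearn.cluster import KMeans

class BehaviorAbstraction:
    """Hypothetical stand-in for BehaviorMiner's clustering-based abstraction."""

    def __init__(self, n_states=8, random_state=0):
        self.kmeans = KMeans(n_clusters=n_states, n_init=10, random_state=random_state)
        self.covered_states = set()

    def fit(self, trace_features):
        # trace_features: (n_traces, n_features) temporal features of ego-vehicle
        # traces, e.g. windowed statistics of speed, acceleration, and heading.
        self.kmeans.fit(trace_features)

    def is_novel(self, trace_feature):
        # Map one trace to an abstract state; it is novel if that state has not
        # been covered by any seed already in the corpus.
        state = int(self.kmeans.predict(trace_feature.reshape(1, -1))[0])
        novel = state not in self.covered_states
        self.covered_states.add(state)
        return novel

# Toy usage with synthetic features standing in for real simulation traces.
rng = np.random.default_rng(0)
miner = BehaviorAbstraction(n_states=8)
miner.fit(rng.normal(size=(100, 6)))
if miner.is_novel(rng.normal(size=6)):
    print("candidate triggers a new abstract behavior; keep it as a seed")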

    Evaluating AIGC Detectors on Code Content

    Artificial Intelligence Generated Content (AIGC) has garnered considerable attention for its impressive performance, with ChatGPT emerging as a leading AIGC model that produces high-quality responses across various applications, including software development and maintenance. Despite its potential, the misuse of ChatGPT poses significant concerns, especially in education and safety-critical domains. Numerous AIGC detectors have been developed and evaluated on natural language data; however, their performance on code-related content generated by ChatGPT remains unexplored. To fill this gap, we present the first empirical study evaluating existing AIGC detectors in the software domain. We created a comprehensive dataset of 492.5K samples of code-related content produced by ChatGPT, covering popular software activities such as Q&A (115K), code summarization (126K), and code generation (226.5K). We evaluated six AIGC detectors, including three commercial and three open-source solutions, on this dataset. Additionally, we conducted a human study to understand human detection capabilities and compare them with the existing AIGC detectors. Our results indicate that AIGC detectors perform worse on code-related data than on natural language data. Fine-tuning can enhance detector performance, especially for content within the same domain, but generalization remains a challenge. The human evaluation reveals that detection by humans is also quite challenging.
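
    As a rough illustration of the kind of evaluation described above, the sketch below computes accuracy and AUC for a detector that assigns each code-related sample a probability of being AI-generated. The detector scores and labels are toy placeholders; the actual detectors, dataset, and metrics are those reported in the paper.

from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate_detector(scores, labels, threshold=0.5):
    """scores: detector probability that a sample is ChatGPT-generated; labels: 1 = ChatGPT, 0 = human."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return {
        "accuracy": accuracy_score(labels, preds),
        "auc": roc_auc_score(labels, scores),
    }

# Toy example: four code-summarization samples with made-up detector scores.
labels = [1, 0, 1, 0]
scores = [0.91, 0.35, 0.62, 0.48]
print(evaluate_detector(scores, labels))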

    CommitBART: A Large Pre-trained Model for GitHub Commits

    GitHub commits, which record code changes together with natural language messages describing them, play a critical role in helping software developers comprehend software evolution. To promote the development of the open-source software community, we collect a commit benchmark including over 7.99 million commits across 7 programming languages. Based on this benchmark, we present CommitBART, a large pre-trained encoder-decoder Transformer model for GitHub commits. The model is pre-trained with six pre-training tasks spanning three categories (i.e., denoising objectives, cross-modal generation, and contrastive learning) to learn commit fragment representations. Furthermore, we unify a ``commit intelligence'' framework with one understanding task and three generation tasks for commits. Comprehensive experiments on these tasks demonstrate that CommitBART significantly outperforms previous pre-trained models for code. Further analysis also reveals that each pre-training task enhances model performance.
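
    A minimal sketch of the generation side, assuming a BART-style encoder-decoder from the Hugging Face transformers library: encode a code diff and decode a commit message. The facebook/bart-base checkpoint is only a generic stand-in; it is not the CommitBART weights, and an untuned base model will not produce meaningful commit messages.

from transformers import AutoTokenizer, BartForConditionalGeneration

model_name = "facebook/bart-base"  # placeholder checkpoint, not CommitBART itself
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# A commit pairs a code change (diff) with a natural language message.
diff = (
    "- if (user == null) return;\n"
    "+ if (user == null) { log.warn(\"missing user\"); return; }\n"
)
inputs = tokenizer(diff, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))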

    Toward a More PERMA(nent) Conceptualization of Worker Well-Being? A Cross-Cultural Study of the Workplace PERMA Profiler

    We examined the factor structure of the recently developed worker well-being measure, the Workplace PERMA Profiler, and the relationships between PERMA dimensions (i.e., positive emotions, engagement, positive relationships, meaning, accomplishment) and job performance (viz., task performance and organizational citizenship behaviors benefiting individuals and the organization at large). The measure exhibited metric (i.e., weak) invariance across samples of participants from the U.S. (N = 284) and China (N = 420). Additionally, for participants who responded to both the Workplace PERMA Profiler and the performance measures, there was a general pattern of positive PERMA–performance relationships across both samples (N = 147 for the U.S.; N = 202 for China). Overall, the Workplace PERMA Profiler may have problematic psychometric properties and item wordings and would thus benefit from further refinement.

    ContraBERT: Enhancing Code Pre-trained Models via Contrastive Learning

    Large-scale pre-trained models such as CodeBERT and GraphCodeBERT have earned widespread attention from both academia and industry. Owing to their superior ability in code representation, they have been further applied to multiple downstream tasks such as clone detection, code search, and code translation. However, it has also been observed that these state-of-the-art pre-trained models are susceptible to adversarial attacks: their performance drops significantly under simple perturbations such as renaming variables. This weakness may be inherited by their downstream models and thereby amplified at an unprecedented scale. To this end, we propose an approach, namely ContraBERT, that aims to improve the robustness of pre-trained models via contrastive learning. Specifically, we design nine kinds of simple and complex data augmentation operators on programming language (PL) and natural language (NL) data to construct different variants. Furthermore, we continue to train the existing pre-trained models with masked language modeling (MLM) and a contrastive pre-training task on the original samples and their augmented variants to enhance model robustness. Extensive experiments demonstrate that ContraBERT can effectively improve the robustness of existing pre-trained models. Further study also confirms that these robustness-enhanced models provide improvements over the original models on four popular downstream tasks.
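
    The contrastive objective can be sketched as an NT-Xent-style loss that pulls the embedding of a sample toward the embedding of its augmented variant (e.g., after variable renaming) and pushes it away from the other samples in the batch. The PyTorch code below uses random tensors in place of encoder outputs and an assumed temperature; it is a sketch of the general technique, not the paper's exact objective.

import torch
import torch.nn.functional as F

def nt_xent_loss(z_orig, z_aug, temperature=0.07):
    """z_orig, z_aug: (batch, dim) embeddings of originals and their augmented variants."""
    z_orig = F.normalize(z_orig, dim=1)
    z_aug = F.normalize(z_aug, dim=1)
    logits = z_orig @ z_aug.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(z_orig.size(0))      # positive pair sits on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: in practice the embeddings come from the code encoder being trained.
batch, dim = 8, 256
z1 = torch.randn(batch, dim, requires_grad=True)
z2 = torch.randn(batch, dim, requires_grad=True)
loss = nt_xent_loss(z1, z2)
loss.backward()  # in a real loop this would update the encoder parameters
print(loss.item())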